04. Exercise 1: Fun with Convolutions

Intro

Exercise 1: Fun with Convolutions

In this exercise, you will get a chance to write some PyTorch code to compute various types of convolutions in a Jupyter Notebook. But before playing with it, I would like to get you started and walk you through an introductory section of the notebook.

ND320 C3 L3 03.1 Exercise- Convolutions

Introduction

Now that you have seen how to use PyTorch to apply a convolutional filter to a 2D image, I want you to try and implement several convolution operations and apply them to a 3D CT volume. As usual, use the workspace below to work on the exercise.
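Before opening the workspace, it may help to see the shape conventions involved. Below is a minimal sketch (not the official exercise solution) of applying a 2D and a 3D convolution to a CT-like volume with PyTorch; the volume dimensions and kernel values are placeholders chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

# Pretend CT volume: 24 axial slices of 128x128 voxels (illustrative shape)
volume = torch.randn(24, 128, 128)

# 2D convolution on a single slice: F.conv2d expects (N, C, H, W)
edge_kernel_2d = torch.tensor([[-1., -1., -1.],
                               [-1.,  8., -1.],
                               [-1., -1., -1.]]).reshape(1, 1, 3, 3)
slice_2d = volume[12].reshape(1, 1, 128, 128)
out_2d = F.conv2d(slice_2d, edge_kernel_2d, padding=1)

# 3D convolution on the whole volume: F.conv3d expects (N, C, D, H, W)
kernel_3d = torch.ones(1, 1, 3, 3, 3) / 27.0   # simple 3D averaging filter
vol_5d = volume.reshape(1, 1, 24, 128, 128)
out_3d = F.conv3d(vol_5d, kernel_3d, padding=1)

print(out_2d.shape)  # torch.Size([1, 1, 128, 128])
print(out_3d.shape)  # torch.Size([1, 1, 24, 128, 128])
```

The key difference to notice is the extra depth dimension in the 3D case: the kernel slides across slices as well as across rows and columns, which is exactly the kind of operation you will experiment with in the notebook.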

Udacity Workspace Note: this workspace is a Jupyter Notebook where everything is set up and ready for you to use. It is a GPU workspace, but you shouldn't need the GPU for any training, so disable it for this exercise.

Code

If you need a local copy of the code, you can find it on the Udacity GitHub: https://github.com/udacity.

Solution summary

You can find the solution for Exercise 1: Fun with Convolutions here.
The solution is presented as a Jupyter Notebook with some additional inline comments. Note that there are some design decisions you could make with 2.5D convolutions: you can choose a different patch size or combine the neighboring slices differently. Ultimately, this is a compromise between memory use and the amount of information you give to your network. It is useful to track the number of trainable parameters of your network (which is largely driven by the convolutional operations) to get an idea of the total memory needed for training.
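As a hedged illustration of one possible 2.5D design choice (not necessarily the one used in the solution notebook), the sketch below treats a patch of 3 neighboring slices as the input channels of a 2D convolution, and shows how you might count the trainable parameters such a layer introduces. The layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# One possible 2.5D layer: 3 neighboring slices stacked as input channels,
# producing 16 feature maps with a 3x3 kernel (sizes chosen for illustration)
conv_25d = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

# Counting trainable parameters: weights (16*3*3*3 = 432) + biases (16) = 448
n_params = sum(p.numel() for p in conv_25d.parameters() if p.requires_grad)
print(n_params)  # 448

# Fake mini-batch of 2.5D patches: (batch, neighboring slices, height, width)
patches = torch.randn(4, 3, 64, 64)
features = conv_25d(patches)
print(features.shape)  # torch.Size([4, 16, 64, 64])
```

Summing `p.numel()` over `model.parameters()` in the same way for your full network gives you the parameter count mentioned above, which is a quick proxy for how much memory training will require.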